Explaining Health Across the Sciences by Unknown


Author: Unknown
Language: eng
Format: epub
ISBN: 9783030526634
Publisher: Springer International Publishing


16.5 The Pathosome and Bioinformatic Approaches

The preceding parts of this chapter introduced the concept of the dynamic pathosome as well as its implications and potential uses. However, some may wonder whether such a framework has any use at all, since we could simply apply machine learning independently of any underlying theory. To show that frameworks such as the dynamic pathosome are still necessary, we will first highlight problems associated with machine learning and then list some benefits of the dynamic pathosome and how it can be used to guide machine learning algorithms.

Machine learning algorithms are often only as good as the data used to train them. A famous example is Tay, a Twitter bot developed to engage with people aged 18 to 24. The bot was programmed to learn from the behavior of other users and, as a result, in a mere 12 hours it went from its first innocuous tweet “hellooooooo world!!!” to stating that feminists “should all die and burn in hell” (Garcia 2016). Moreover, machine learning is extremely prone to arriving at trivial conclusions and to recapitulating common prejudices, including racism. This is a serious problem because machine learning algorithms often only report their results, and the people using them are frequently unaware that these results rest on spurious associations. A very troubling example is the finding that the proprietary algorithms widely used by judges in the USA to help determine the risk of reoffending are almost twice as likely to mistakenly flag black defendants as high risk as they are white defendants (Crawford and Calo 2016; Angwin et al. 2016). What is worse, these algorithms are not particularly precise and predicted recidivism wrongly in 39% of cases. It is easy to imagine similar problems arising in biomedical research, where the type of data and the “black box” nature of machine learning would make such problems even harder to detect.
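The failure mode just described, in which a model scores well by latching onto an artifact of its training data, is easy to reproduce. The sketch below is a hypothetical illustration in plain NumPy, not drawn from any of the cited studies: a logistic regression is trained on synthetic data in which a confound (imagine a clinic identifier) happens to match the diagnostic label exactly, and is then evaluated on data where that coincidence no longer holds. All names and numbers are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_data(n, leak):
    """Synthetic records. 'signal' is a weakly informative real predictor;
    'spurious' is a confound (a hypothetical clinic identifier) that, when
    leak=True, coincides exactly with the label in the training data."""
    y = rng.integers(0, 2, n)
    signal = 0.5 * y + rng.normal(size=n)            # weak genuine signal
    spurious = y if leak else rng.integers(0, 2, n)  # confound
    return np.column_stack([signal, spurious]), y

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def train_logreg(X, y, lr=0.5, steps=2000):
    """Plain gradient descent on the logistic loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

X_tr, y_tr = make_data(1000, leak=True)    # confound present in training
X_te, y_te = make_data(1000, leak=False)   # confound absent at deployment

w, b = train_logreg(X_tr, y_tr)
acc = lambda X, y: np.mean((sigmoid(X @ w + b) > 0.5) == y)
train_acc, test_acc = acc(X_tr, y_tr), acc(X_te, y_te)

print(f"train accuracy: {train_acc:.2f}")               # looks excellent
print(f"test accuracy:  {test_acc:.2f}")                # collapses
print(f"weight on confound vs signal: {w[1]:.2f} vs {w[0]:.2f}")
```

The model reports only its (excellent) training performance while its learned weights sit almost entirely on the confound, which is exactly the situation a user of a black-box system would never see.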

Another problem, this time concerning mostly deep-learning algorithms, is that they are surprisingly easy to fool. For example, an artificial intelligence (AI) trained to recognize traffic signs can be tricked into misreading a stop sign as, e.g., a speed limit sign by a few rectangles pinned to it (Eykholt et al. 2018), and an AI trained on images of animals can be fooled into recognizing a lion in white noise (with 99.99% certainty) (Nguyen et al. 2015). These results have profound implications for biomedicine, because the great variability inherent in almost every aspect of biology makes similar mistakes quite likely. Furthermore, a recent study showed that pixels added to a medical scan can fool a deep-learning neural network into wrongly diagnosing cancer (Finlayson et al. 2019). The exploitation of this finding poses a serious security threat to hospitals. Pictures altered by a few pixels are almost indistinguishable from the originals to the human eye, so there is almost no chance that hospital staff would notice the tampering, while the consequences of such an alteration could be fatal for patients. Therefore, while deep learning may be beneficial
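The pixel-level attacks cited above exploit the gradient of the model with respect to its input. A minimal sketch of the underlying mechanism, using a toy linear classifier and synthetic data rather than any real network or medical image (the gradient-sign step mirrors the FGSM family of attacks), is:

```python
import numpy as np

# Toy gradient-sign adversarial attack. Everything here is synthetic and
# illustrative: real attacks target deep networks, but the mechanism, a
# small per-pixel step against the gradient, is the same.

rng = np.random.default_rng(0)

n_pixels = 64
w = rng.normal(size=n_pixels)        # weights of a "trained" linear model
b = 0.0

def predict(x):
    """Class 1 if the linear score is positive, else class 0."""
    return 1 if x @ w + b > 0 else 0

x = rng.normal(size=n_pixels)        # a random "image"
score = x @ w + b
label = predict(x)

# For a linear model the input-gradient of the score is exactly w, so a
# uniform step of size epsilon against sign(w) reduces the score as fast
# as any perturbation of that size can. Epsilon is chosen just large
# enough (1.1x the decision margin) to flip the prediction.
epsilon = 1.1 * abs(score) / np.sum(np.abs(w))
x_adv = x - np.sign(score) * epsilon * np.sign(w)

print("original prediction:   ", label)
print("adversarial prediction:", predict(x_adv))  # flipped by construction
print("per-pixel change:      ", round(epsilon, 3))
```

Every pixel moves by the same small epsilon, yet the classification flips; a deep network is attacked the same way, with the gradient obtained by backpropagation instead of read off directly.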





